Tute (Completed) (Week 4)
Table of Contents
1. TinyImp
1.1. Static semantics
Ideally, static semantics judgements such as the \(\mathbf{ok}\) judgements should allow us to make some predictions about the dynamic behaviour of the program. In this case, we'd hope that an \(\mathbf{ok}\) program would be guaranteed not to misbehave in some way.
- Give an example of a program where all variables are initialised before use, but that is nonetheless not \(\mathbf{ok}\).
Suppose we have proven that
\[ \emptyset;\emptyset\vdash s\;\mathbf{ok} \leadsto \emptyset\]
- The W4 Monday notes describe three different semantics for uninitialised variables: crash-and-burn, default values and junk data. Under each of these interpretations, state (informally, in English) some desirable property about the runtime behaviour of \(s\) that follows as a result of the \(\mathbf{ok}\) judgement above.
- Can you think of a way to formalise these properties as logical statements involving judgements?
1:
For example, consider this program: \[\begin{array}{l} \mathbf{var}\;x\;\cdot \\ \mathbf{if}\;1\;\mathbf{then}\\ \quad x:=1\\ \mathbf{else}\\ \quad \mathbf{skip}\\ \mathbf{fi}; \\ x := x + 1 \end{array} \]
We know that the else-branch will never be taken, but the \(\mathbf{ok}\) judgement doesn't take this into account.
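The conservatism of the analysis can be seen in a small sketch (hypothetical mini-AST encoding, not the official TinyImp syntax): an if-statement only guarantees the variables initialised on *both* branches, so the example above is rejected even though its else-branch is dead.

```python
def initialised(stmt, init):
    """Return the set of variables guaranteed initialised after stmt,
    or raise if a variable may be read before initialisation."""
    kind = stmt[0]
    if kind == "skip":
        return init
    if kind == "assign":                      # ("assign", x, expr_vars)
        _, x, expr_vars = stmt
        if not expr_vars <= init:
            raise ValueError(f"may read uninitialised: {expr_vars - init}")
        return init | {x}
    if kind == "seq":                         # ("seq", s1, s2)
        return initialised(stmt[2], initialised(stmt[1], init))
    if kind == "if":                          # ("if", guard_vars, s1, s2)
        _, guard_vars, s1, s2 = stmt
        if not guard_vars <= init:
            raise ValueError(f"may read uninitialised: {guard_vars - init}")
        # conservative: intersect the guarantees of the two branches
        return initialised(s1, init) & initialised(s2, init)
    raise ValueError(f"unknown statement {kind}")

# The example program: if 1 then x := 1 else skip fi; x := x + 1
prog = ("seq",
        ("if", set(), ("assign", "x", set()), ("skip",)),
        ("assign", "x", {"x"}))

try:
    initialised(prog, set())
except ValueError as e:
    print("rejected:", e)   # the analysis cannot see that the else-branch is dead
```

The intersection at the if-statement is exactly where precision is lost: the else-branch contributes no initialisations, so \(x\) is not guaranteed initialised afterwards.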
2:
Under all of these semantics, we know that no variable is ever read or written outside its scope.
Under crash-and-burn semantics, an \(\mathbf{ok}\) program will never have execution aborted by accessing an uninitialised variable.
Under junk data semantics, an \(\mathbf{ok}\) program will have deterministic evaluation.
3:
"No variable used outside its scope" can be stated syntactically as follows:
\[\mathit{FV}(s) = \emptyset\]
Semantically, we can say, e.g., that the result of evaluation doesn't depend on the variables in the state: deleting an arbitrary variable \(x\) from the initial state (written \(\sigma|_x\)) just deletes it from the final state, without otherwise changing the outcome:
\[\forall \sigma,\sigma',x. (\sigma,s) \Downarrow \sigma' \;\Rightarrow\; (\sigma|_x,s) \Downarrow \sigma'|_x\]
Deterministic evaluation can be stated as follows:
\[\forall \sigma_1,\sigma_2,\sigma_3.\;(\sigma_1,s) \Downarrow \sigma_2 \;\wedge\; (\sigma_1,s) \Downarrow \sigma_3 \;\Rightarrow\; \sigma_2 = \sigma_3\]
To say "no aborts due to uninitialised variables", it would be tempting to say that \(s\) will evaluate to something:
\[\forall \sigma. \exists \sigma'. (\sigma,s) \Downarrow \sigma'\]
Unfortunately, this doesn't quite work. The reason is that there are \(\mathbf{ok}\) programs that do not evaluate to any state, because they run forever (diverge). For example, we have
\[\forall \sigma,\sigma'. \neg((\sigma,\mathbf{while}\ 1\ \mathbf{do}\ \mathbf{skip}\ \mathbf{od}) \Downarrow \sigma')\]
This means that our big-step semantics is not adequate here, because it cannot distinguish divergence from failure.
If we had a small-step semantics \(\mapsto\) for TinyImp, we could state our desired property as follows: for all \(\sigma\), every maximal trace starting from \((\sigma,s)\) is complete.
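The distinction can be illustrated with a toy small-step machine (hypothetical syntax and rules, not the lecture's formal semantics): a stuck configuration has no successor, while a diverging one always has one, so running with a step budget separates the two behaviours in a way a big-step evaluator cannot.

```python
def step(cfg):
    """One small step; return the next configuration, or None if none exists."""
    sigma, stmt = cfg
    kind = stmt[0]
    if kind == "skip":
        return None                             # complete: nothing left to do
    if kind == "assign":                        # ("assign", x, y): x := y
        _, x, y = stmt
        if y not in sigma:
            return None                         # stuck: crash-and-burn on an uninitialised read
        return ({**sigma, x: sigma[y]}, ("skip",))
    if kind == "while_true":                    # while 1 do skip od
        return (sigma, stmt)                    # steps back to itself: diverges
    return None

def run(cfg, budget):
    """Run up to `budget` steps; report how the trace ended."""
    for _ in range(budget):
        nxt = step(cfg)
        if nxt is None:
            return "complete" if cfg[1][0] == "skip" else "stuck"
        cfg = nxt
    return "still running"

print(run(({}, ("assign", "x", "y")), 100))   # stuck: y is uninitialised
print(run(({}, ("while_true",)), 100))        # still running: divergence
```

A maximal trace that ends in a non-\(\mathbf{skip}\) configuration is incomplete (a failure); an infinite trace is divergence. The big-step relation \(\Downarrow\) is silent in both cases, which is why it cannot express the "no aborts" property.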
1.2. Associativity of sequential composition
We never bothered specifying whether sequential composition in TinyImp is left-associative or right-associative. That is, whether \(s_1;s_2;s_3\) should be evaluated as \((s_1; s_2); s_3\), or as \(s_1; (s_2; s_3)\).
Prove that \((\sigma_1, (s_1;s_2);s_3) \Downarrow \sigma_2\) holds iff \((\sigma_1, s_1;(s_2;s_3)) \Downarrow \sigma_2\).
We show only the \(\Rightarrow\) direction. The \(\Leftarrow\) direction is similar.
Assume \((\sigma_1, (s_1;s_2);s_3) \Downarrow \sigma_2\). We proceed by case analysis on the derivation of this judgement. The only rule that is applicable is the sequential composition rule, which means there must exist a \(\sigma_3\) such that
\[(\sigma_1, s_1;s_2) \Downarrow \sigma_3 \qquad (1)\]
and
\[(\sigma_3, s_3) \Downarrow \sigma_2 \qquad (2)\]
Further case analysis on the derivation of (1) yields that there must exist a \(\sigma_4\) such that
\[(\sigma_1, s_1) \Downarrow \sigma_4 \qquad (3)\]
and
\[(\sigma_4, s_2) \Downarrow \sigma_3 \qquad (4)\]
We can then complete the proof by constructing the following derivation:
\[ \dfrac {\dfrac{}{(\sigma_1, s_1) \Downarrow \sigma_4}(3) \quad \dfrac{ \dfrac {} {(\sigma_4, s_2) \Downarrow \sigma_3}(4) \quad \dfrac {} {(\sigma_3, s_3) \Downarrow \sigma_2}(2) } {(\sigma_4, s_2;s_3) \Downarrow \sigma_2} } {(\sigma_1, s_1;(s_2;s_3)) \Downarrow \sigma_2} \]
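The result can also be sanity-checked on a toy big-step evaluator (hypothetical encoding; sequencing nests either way, and the evaluator simply threads the state left to right):

```python
def bigstep(sigma, stmt):
    """Evaluate stmt in state sigma, returning the final state."""
    kind = stmt[0]
    if kind == "skip":
        return sigma
    if kind == "assign":                  # ("assign", x, n): x := n
        _, x, n = stmt
        return {**sigma, x: n}
    if kind == "seq":                     # ("seq", s1, s2)
        return bigstep(bigstep(sigma, stmt[1]), stmt[2])
    raise ValueError(f"unknown statement {kind}")

s1 = ("assign", "x", 1)
s2 = ("assign", "y", 2)
s3 = ("assign", "x", 3)

left  = bigstep({}, ("seq", ("seq", s1, s2), s3))   # (s1; s2); s3
right = bigstep({}, ("seq", s1, ("seq", s2, s3)))   # s1; (s2; s3)
assert left == right == {"x": 3, "y": 2}
```

Both nestings execute \(s_1\), then \(s_2\), then \(s_3\) on the same intermediate states, mirroring the derivation rearrangement in the proof.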
1.3. A different looping construct
Suppose we wanted to extend TinyImp with a new type of loop:
\[ \mathbf{do}\ s\ \mathbf{until}\ e\]
This construct runs the loop body \(s\) at least once, checking the guard only after the body has executed, and terminates if the guard is true (i.e. nonzero).
- Give big-step semantics rules for this construct in the style of the lecture slides.
- Propose a translation of this construct into existing TinyImp constructs.
- Validate your translation by showing that the Hoare logic rule for this loop is derivable for your translation:
\[ \dfrac{ \{ \varphi \}\ s\ \{ \varphi \} }{\{ \varphi \}\ \mathbf{do}\ s\ \mathbf{until}\ e\ \{ \varphi \land e \} } \]
1:
\[ \dfrac{ (\sigma_1, s) \Downarrow \sigma_2 \quad \sigma_2 \vdash e \Downarrow v \quad v \neq 0 }{ (\sigma_1, \mathbf{do}\ s\ \mathbf{until}\ e) \Downarrow \sigma_2 } \]
\[ \dfrac{ (\sigma_1, s) \Downarrow \sigma_2 \quad \sigma_2 \vdash e \Downarrow 0 \quad (\sigma_2, \mathbf{do}\ s\ \mathbf{until}\ e) \Downarrow \sigma_3 }{ (\sigma_1, \mathbf{do}\ s\ \mathbf{until}\ e) \Downarrow \sigma_3 } \]
2:
\[ \mathbf{do}\ s\ \mathbf{until}\ e \equiv s; \mathbf{while}\ \neg e\ \mathbf{do}\ s\ \mathbf{od}\]
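Before validating the translation in Hoare logic, it can be spot-checked on a toy evaluator (hypothetical encoding: the body and guard are Python functions of the state, which is not the formal semantics above):

```python
def eval_do_until(sigma, body, guard):
    """Direct semantics: run the body, then stop once the guard is nonzero."""
    while True:
        sigma = body(sigma)
        if guard(sigma) != 0:
            return sigma

def eval_translation(sigma, body, guard):
    """Translated form: body once, then while not-guard do body od."""
    sigma = body(sigma)
    while guard(sigma) == 0:
        sigma = body(sigma)
    return sigma

# Example loop: decrement x; stop when x <= 0.
body  = lambda sigma: {**sigma, "x": sigma["x"] - 1}
guard = lambda sigma: 1 if sigma["x"] <= 0 else 0

assert eval_do_until({"x": 5}, body, guard) \
    == eval_translation({"x": 5}, body, guard) == {"x": 0}
# The body runs at least once even when the guard already holds:
assert eval_do_until({"x": 0}, body, guard) == {"x": -1}
```

Both versions agree because unfolding the translated \(\mathbf{while}\) once yields exactly the "run body, test guard, maybe repeat" shape of the direct rules.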
3:
\[ \dfrac{ \dfrac{ }{\{ \varphi \}\ s\ \{ \varphi \}}\quad \dfrac{ \dfrac{ \dfrac{ \dfrac{}{\{ \varphi \}\ s\ \{ \varphi \} } }{ \{ \varphi \land \neg e \}\ s\ \{ \varphi \} }\text{Con} }{\{ \varphi \}\ \mathbf{while}\ \neg e\ \mathbf{do}\ s\ \mathbf{od}\ \{ \varphi \land \neg (\neg e) \}}\text{Loop} }{\{ \varphi \}\ \mathbf{while}\ \neg e\ \mathbf{do}\ s\ \mathbf{od}\ \{ \varphi \land e \}}\text{Con} }{\{ \varphi \}\ s; \mathbf{while}\ \neg e\ \mathbf{do}\ s\ \mathbf{od}\ \{ \varphi \land e \} }\text{Seq} \]